Results 1 - 4 of 4
1.
J Med Imaging (Bellingham) ; 9(3): 034003, 2022 May.
Article in English | MEDLINE | ID: covidwho-1901880

ABSTRACT

Purpose: Rapid prognostication of COVID-19 patients is important for efficient resource allocation. We evaluated the relative prognostic value of baseline clinical variables (CVs), quantitative human-read chest CT (qCT), and AI-read chest radiograph (qCXR) airspace disease (AD) in predicting severe COVID-19. Approach: We retrospectively selected 131 COVID-19 patients (SARS-CoV-2 positive, March to October 2020) at a tertiary hospital in the United States who underwent chest CT and CXR within 48 hr of initial presentation. CVs included patient demographics and laboratory values; imaging variables included qCT volumetric percentage AD (POv) and qCXR area-based percentage AD (POa), assessed by a deep convolutional neural network. Our prognostic outcome was need for ICU admission. We compared the performance of three logistic regression models: using CVs known to be associated with prognosis (model I), using a dimension-reduced set of best predictor variables (model II), and using only age and AD (model III). Results: 60/131 patients required ICU admission, whereas 71/131 did not. Model I performed the poorest (AUC = 0.67 [0.58 to 0.76]; accuracy = 77%). Model II performed the best (AUC = 0.78 [0.71 to 0.86]; accuracy = 81%). Model III was equivalent (AUC = 0.75 [0.67 to 0.84]; accuracy = 80%). Both models II and III outperformed model I (AUC difference = 0.11 [0.02 to 0.19], p = 0.01; AUC difference = 0.08 [0.01 to 0.15], p = 0.04, respectively). Model II and III results did not change significantly when POv was replaced by POa. Conclusions: Severe COVID-19 can be predicted using only age and quantitative AD imaging metrics at initial diagnosis, which outperform the set of CVs. Moreover, AI-read qCXR can replace qCT metrics without loss of prognostic performance, promising more resource-efficient prognostication.
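The model comparison described above (clinical variables versus age plus airspace disease alone) can be sketched in a few lines. This is not the authors' code: the feature names, coefficients, and patient data below are entirely synthetic, and scikit-learn is an assumed dependency; the sketch only illustrates how a "CVs-only" logistic regression model can be compared with an "age + AD" model by AUC.

```python
# Hypothetical sketch (synthetic data): compare a clinical-variables
# logistic regression model with an age + airspace-disease model,
# in the spirit of model I vs. model III above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 131  # cohort size from the abstract; everything else is synthetic
age = rng.normal(60, 15, n)
ad_percent = rng.uniform(0, 60, n)        # stand-in for qCT POv / qCXR POa
labs = rng.normal(0, 1, (n, 4))           # stand-in clinical laboratory values

# Synthetic outcome: ICU admission driven mostly by age and AD burden
logit = 0.04 * (age - 60) + 0.08 * (ad_percent - 20)
icu = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_cv = np.column_stack([age, labs])        # "model I": clinical variables
X_ad = np.column_stack([age, ad_percent])  # "model III": age + AD only

for name, X in [("CVs model", X_cv), ("age+AD model", X_ad)]:
    Xtr, Xte, ytr, yte = train_test_split(X, icu, test_size=0.3,
                                          random_state=0, stratify=icu)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```

On synthetic data of this shape the AUCs carry no clinical meaning; the point is only the side-by-side evaluation protocol.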

2.
Eur Radiol ; 32(7): 4446-4456, 2022 Jul.
Article in English | MEDLINE | ID: covidwho-1707890

ABSTRACT

OBJECTIVES: We aimed to develop deep learning models using longitudinal chest X-rays (CXRs) and clinical data to predict in-hospital mortality of COVID-19 patients in the intensive care unit (ICU). METHODS: Six hundred fifty-four patients (212 deceased, 442 alive, 5645 total CXRs) were identified across two institutions. Imaging and clinical data from one institution were used to train five longitudinal transformer-based networks applying five-fold cross-validation. The models were tested on data from the other institution, and pairwise comparisons were used to determine the best-performing models. RESULTS: A higher proportion of deceased patients had elevated white blood cell count, decreased absolute lymphocyte count, elevated creatinine concentration, and a higher incidence of cardiovascular and chronic kidney disease. A model based on pre-ICU CXRs achieved an AUC of 0.632 and an accuracy of 0.593, and a model based on ICU CXRs achieved an AUC of 0.697 and an accuracy of 0.657. A model based on all longitudinal CXRs (both pre-ICU and ICU) achieved an AUC of 0.702 and an accuracy of 0.694. A model based on clinical data alone achieved an AUC of 0.653 and an accuracy of 0.657. The addition of longitudinal imaging to clinical data in a combined model significantly improved performance, reaching an AUC of 0.727 (p = 0.039) and an accuracy of 0.732. CONCLUSIONS: The addition of longitudinal CXRs to clinical data significantly improves mortality prediction with deep learning for COVID-19 patients in the ICU. KEY POINTS: • Deep learning was used to predict mortality in COVID-19 ICU patients. • Serial radiographs and clinical data were used. • The models could inform clinical decision-making and resource allocation.
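All of the model comparisons above are reported as AUCs. The AUC of a risk score equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney U statistic), which can be computed directly, without a library, as a minimal sketch:

```python
# Rank-based AUC: P(score of a positive > score of a negative),
# with ties counted as half. Pure numpy, no sklearn required.
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC via the Mann-Whitney U statistic over all pos/neg pairs."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties get half credit
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(auc_mann_whitney(y, s))  # → 0.75
```

The pairwise formulation is O(n_pos x n_neg) and is fine at abstract-scale cohort sizes; for large datasets a rank-sum implementation is preferable.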


Subject(s)
COVID-19 , Deep Learning , Humans , Intensive Care Units , Radiography , X-Rays
3.
Eur Radiol ; 31(11): 8775-8785, 2021 Nov.
Article in English | MEDLINE | ID: covidwho-1209506

ABSTRACT

OBJECTIVES: To investigate machine learning classifiers and interpretable models using chest CT for detection of COVID-19 and differentiation from other pneumonias, interstitial lung disease (ILD) and normal CTs. METHODS: Our retrospective multi-institutional study obtained 2446 chest CTs from 16 institutions (including 1161 COVID-19 patients). Training/validation/testing cohorts included 1011/50/100 COVID-19, 388/16/33 ILD, 189/16/33 other pneumonias, and 559/17/34 normal (no pathologies) CTs. A metric-based approach for the classification of COVID-19 used interpretable features, relying on logistic regression and random forests. A deep learning-based classifier differentiated COVID-19 via 3D features extracted directly from CT attenuation and probability distribution of airspace opacities. RESULTS: The most discriminative features of COVID-19 were the percentage of airspace opacity and peripheral and basal predominant opacities, concordant with the typical characterization of COVID-19 in the literature. Unsupervised hierarchical clustering was used to compare feature distributions across the COVID-19 and control cohorts. The metrics-based classifier achieved AUC = 0.83, sensitivity = 0.74, and specificity = 0.79, versus 0.93, 0.90, and 0.83, respectively, for the DL-based classifier. Most of the ambiguity comes from non-COVID-19 pneumonia with manifestations that overlap with COVID-19, as well as mild COVID-19 cases. Non-COVID-19 classification performance is 91% for ILD, 64% for other pneumonias, and 94% for no pathologies, which demonstrates the robustness of our method against different compositions of control groups. CONCLUSIONS: Our new method accurately discriminates COVID-19 from other types of pneumonia, ILD, and CTs with no pathologies, using quantitative imaging features derived from chest CT, while balancing interpretability of results and classification performance and, therefore, may be useful to facilitate diagnosis of COVID-19.
KEY POINTS: • Unsupervised clustering reveals the key tomographic features, including percent airspace opacity and peripheral and basal opacities, most typical of COVID-19 relative to control groups. • COVID-19-positive CTs were compared with COVID-19-negative chest CTs (including a balanced distribution of non-COVID-19 pneumonia, ILD, and no pathologies). Classification accuracies for COVID-19, pneumonia, ILD, and CT scans with no pathologies are 90%, 64%, 91%, and 94%, respectively. • Our deep learning (DL)-based classification method demonstrates an AUC of 0.93 (sensitivity 90%, specificity 83%). Machine learning methods applied to quantitative chest CT metrics can therefore improve diagnostic accuracy in suspected COVID-19, particularly in resource-constrained environments.
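The metrics-based classifier above operates on a handful of interpretable CT features rather than raw voxels. A hedged sketch of that style of model (not the authors' pipeline: feature names are illustrative, the data are synthetic, and scikit-learn is an assumed dependency):

```python
# Hypothetical sketch: random forest over interpretable CT metrics
# (percent airspace opacity, peripheral/basal fractions), synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
percent_opacity = rng.uniform(0, 50, n)   # % of lung volume with opacity
peripheral_frac = rng.uniform(0, 1, n)    # fraction of opacity that is peripheral
basal_frac = rng.uniform(0, 1, n)         # fraction of opacity that is basal

# Synthetic label: "COVID-19-like" cases skew toward higher opacity burden
# with peripheral and basal predominance, mirroring the reported pattern
p = 1 / (1 + np.exp(-(0.05 * percent_opacity + 2.0 * peripheral_frac
                      + 1.5 * basal_frac - 2.5)))
y = (rng.random(n) < p).astype(int)

X = np.column_stack([percent_opacity, peripheral_frac, basal_frac])
rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```

Because the features are human-readable quantities, per-feature importances from the fitted forest remain interpretable, which is the trade-off this class of model makes against end-to-end deep learning.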


Subject(s)
COVID-19 , Humans , Machine Learning , Retrospective Studies , SARS-CoV-2 , Thorax
4.
Invest Radiol ; 56(8): 471-479, 2021 08 01.
Article in English | MEDLINE | ID: covidwho-1043316

ABSTRACT

OBJECTIVES: The aim of this study was to leverage volumetric quantification of airspace disease (AD) derived from a superior modality (computed tomography [CT]) serving as ground truth, projected onto digitally reconstructed radiographs (DRRs), to (1) train a convolutional neural network (CNN) to quantify AD on paired chest radiographs (CXRs) and CTs, and (2) compare the DRR-trained CNN to expert human readers in the CXR evaluation of patients with confirmed COVID-19. MATERIALS AND METHODS: We retrospectively selected a cohort of 86 COVID-19 patients (with positive reverse transcriptase-polymerase chain reaction test results) from March to May 2020 at a tertiary hospital in the northeastern United States, who underwent chest CT and CXR within 48 hours. The ground-truth volumetric percentage of COVID-19-related AD (POv) was established by manual AD segmentation on CT. The resulting 3-dimensional masks were projected into 2-dimensional anterior-posterior DRRs to compute the area-based AD percentage (POa). A CNN was trained with DRR images generated from a larger-scale CT dataset of COVID-19 and non-COVID-19 patients, automatically segmenting lungs, AD, and quantifying POa on CXR. The CNN POa results were compared with POa quantified on CXR by 2 expert readers and with the POv ground truth, by computing correlations and mean absolute errors. RESULTS: Bootstrap mean absolute error and correlations between POa and POv were 11.98% (11.05%-12.47%) and 0.77 (0.70-0.82) for the average of expert readers and 9.56% to 9.78% (8.83%-10.22%) and 0.78 to 0.81 (0.73-0.85) for the CNN, respectively. CONCLUSIONS: Our CNN trained with DRR using CT-derived airspace quantification achieved expert radiologist level of accuracy in the quantification of AD on CXR in patients with positive reverse transcriptase-polymerase chain reaction test results for COVID-19.
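The core label-generation step above, collapsing a 3D CT segmentation mask along the anterior-posterior axis into a 2D DRR-plane mask to convert a volumetric percentage (POv) into an area percentage (POa), can be illustrated with toy volumes. This is a minimal numpy sketch under that assumption, not the authors' projection code:

```python
# Toy illustration: project 3D lung and airspace-disease (AD) masks along
# the anterior-posterior axis (axis 0) and compare volumetric (POv) vs
# area-based (POa) disease percentages.
import numpy as np

def project_ap(mask3d):
    """Collapse a binary 3D mask along axis 0 into a 2D binary mask."""
    return mask3d.any(axis=0)

# Toy 3D masks: lung occupies a 6x6x6 block, AD a 3x3x3 sub-block inside it
lung = np.zeros((10, 10, 10), dtype=bool)
lung[2:8, 2:8, 2:8] = True
ad = np.zeros_like(lung)
ad[3:6, 3:6, 3:6] = True

pov = ad.sum() / lung.sum() * 100          # volumetric percentage (POv)
lung2d, ad2d = project_ap(lung), project_ap(ad)
poa = ad2d.sum() / lung2d.sum() * 100      # area-based percentage (POa)

print(f"POv = {pov:.1f}%, POa = {poa:.1f}%")  # → POv = 12.5%, POa = 25.0%
```

Note that POa and POv differ even for this simple geometry (27/216 voxels versus 9/36 projected pixels), which is why the study measures correlation and absolute error between the two quantities rather than treating them as interchangeable.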


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Radiography, Thoracic , Radiologists , Tomography, X-Ray Computed , Cohort Studies , Humans , Lung/diagnostic imaging , Male , Retrospective Studies